NSVQ: Noise Substitution in Vector Quantization for Machine Learning

Authors

Abstract

Machine learning algorithms have been shown to be highly effective in solving optimization problems in a wide range of applications. Such algorithms typically use gradient descent with backpropagation and the chain rule. Hence, backpropagation fails if intermediate gradients are zero for some functions in the computational graph, because multiplying by zero causes the gradients to collapse. Vector quantization is one of those challenging functions for machine learning algorithms, since it is a piece-wise constant function and its gradients are zero almost everywhere. A typical solution is to apply the straight-through estimator, which simply copies the gradients over the vector quantization function during backpropagation. Other solutions are based on smooth or stochastic approximation. This study proposes a technique called NSVQ, which approximates the behavior of vector quantization by substituting multiplicative noise, so that it can be used in machine learning problems. Specifically, the vector quantization error is replaced by the product of the original error and a normalized noise vector, the samples of which are drawn from a zero-mean, unit-variance normal distribution. We test the proposed NSVQ in three scenarios with various types of applications. Based on the experiments, NSVQ achieves higher accuracy and faster convergence in comparison to the straight-through estimator, exponential moving averages, and MiniBatchKmeans approaches.
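
The abstract describes the substitution step concretely, so a minimal PyTorch sketch of that idea follows. The function name nsvq, the tensor shapes, and the exact forward formula x̃ = x + ‖x − x_q‖ · v/‖v‖ (with v drawn from a standard normal) are our reading of the description above, not the authors' reference implementation.

```python
import torch

def nsvq(x, codebook):
    """Simulate vector quantization with substituted noise (sketch).

    x:        (batch, dim) input vectors
    codebook: (num_codes, dim) learnable codewords
    """
    # Hard nearest-neighbour assignment; on its own this step has
    # zero gradient almost everywhere.
    dists = torch.cdist(x, codebook)         # (batch, num_codes)
    x_q = codebook[dists.argmin(dim=1)]      # (batch, dim)

    # Replace the quantization error with noise of the same magnitude:
    # e_hat = ||x - x_q|| * v / ||v||,  v ~ N(0, I).
    v = torch.randn_like(x)
    v = v / v.norm(dim=1, keepdim=True)
    err_norm = (x - x_q).norm(dim=1, keepdim=True)

    # Gradients reach the codebook through err_norm, which is smooth,
    # so no straight-through gradient copying is needed.
    return x + err_norm * v

# Hypothetical usage: gradients flow to the codebook via the error norm.
codebook = torch.nn.Parameter(torch.randn(64, 16))
x = torch.randn(32, 16)
nsvq(x, codebook).sum().backward()  # codebook.grad is populated
```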

Similar articles

Vector Quantization for Machine Vision

This paper shows how to reduce the computational cost for a variety of common machine vision tasks by operating directly in the compressed domain, particularly in the context of hardware acceleration. Pyramid Vector Quantization (PVQ) is the compression technique of choice and its properties are exploited to simplify Support Vector Machines (SVM), Convolutional Neural Networks (CNNs), Histogram ...

Combine Vector Quantization and Support Vector Machine for Imbalanced Datasets

In cases of extremely imbalanced datasets with high dimensions, standard machine learning techniques tend to be overwhelmed by the large classes. This paper rebalances skewed datasets by compressing the majority class. This approach combines Vector Quantization and Support Vector Machine and constructs a new approach, VQ-SVM, to rebalance datasets without significant information loss. Some issue...

Generalized Learning Vector Quantization

We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and ...

Habituation in Learning Vector Quantization

A modification of Kohonen's Learning Vector Quantization is proposed to handle hard cases of supervised learning with a rugged decision surface or asymmetries in the input data structure. Cell reference points (neurons) are forced to move close to the decision surface by successively omitting input data that do not find a neuron of the opposite class within a circle of shrinking radius. Th...

Matrix Learning in Learning Vector Quantization

We propose a new matrix learning scheme to extend Generalized Relevance Learning Vector Quantization (GRLVQ), an efficient prototype-based classification algorithm. By introducing a full matrix of relevance factors in the distance measure, correlations between different features and their importance for the classification scheme can be taken into account and automated, general metric adaptation...


Journal

Journal title: IEEE Access

Year: 2022

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2022.3147670